Scalable Bayesian Meta-Learning through Generalized Implicit Gradients

Authors

Abstract

Meta-learning owns unique effectiveness and swiftness in tackling emerging tasks with limited data. Its broad applicability is revealed by viewing it as a bi-level optimization problem. The resultant algorithmic viewpoint, however, faces scalability issues when the inner-level optimization relies on gradient-based iterations. Implicit differentiation has been considered to alleviate this challenge, but it is restricted to an isotropic Gaussian prior and only favors deterministic meta-learning approaches. This work markedly mitigates the scalability bottleneck by cross-fertilizing the benefits of implicit differentiation to probabilistic Bayesian meta-learning. The novel implicit Bayesian meta-learning (iBaML) method not only broadens the scope of learnable priors, but also quantifies the associated uncertainty. Furthermore, the ultimate complexity is well controlled regardless of the inner-level optimization trajectory. Analytical error bounds are established to demonstrate the precision and efficiency of the generalized implicit gradient over the explicit one. Extensive numerical tests are also carried out to empirically validate the performance of the proposed method.
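To make the abstract's bi-level viewpoint concrete, here is a minimal sketch of the standard implicit-gradient derivation this line of work builds on. The notation ($\theta$ for meta-parameters, $\phi_t$ for task-level parameters, $F_t$ for the outer task loss, $L_t$ for the inner loss) is ours, and the formula below is the generic implicit meta-gradient, not the paper's generalized iBaML estimator.

\[
\min_{\theta} \; \sum_{t} F_t\big(\phi_t^{*}(\theta)\big)
\qquad \text{s.t.} \qquad
\phi_t^{*}(\theta) \in \arg\min_{\phi} L_t(\phi, \theta).
\]
% Differentiating the inner stationarity condition \nabla_\phi L_t(\phi_t^*(\theta), \theta) = 0
% via the implicit function theorem (assuming \nabla^2_{\phi\phi} L_t is invertible) yields
\[
\nabla_{\theta} F_t
= -\, \nabla^2_{\theta\phi} L_t\big(\phi_t^{*}, \theta\big)\,
\big[\nabla^2_{\phi\phi} L_t\big(\phi_t^{*}, \theta\big)\big]^{-1}
\nabla_{\phi} F_t\big(\phi_t^{*}\big).
\]

This expression involves only quantities evaluated at the converged $\phi_t^{*}$, so its cost is independent of how many inner gradient steps produced $\phi_t^{*}$; that is the scalability advantage the abstract contrasts with explicit differentiation through the unrolled trajectory. The isotropic-Gaussian restriction mentioned above corresponds to an inner loss with a proximal term $\frac{\lambda}{2}\|\phi - \theta\|_2^2$, for which $\nabla^2_{\theta\phi} L_t = -\lambda I$ and the gradient simplifies to $\lambda \big[\nabla^2_{\phi\phi} L_t\big]^{-1} \nabla_{\phi} F_t$.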


Similar Articles

Scalable Meta-Learning for Bayesian Optimization

Bayesian optimization has become a standard technique for hyperparameter optimization, including data-intensive models such as deep neural networks that may take days or weeks to train. We consider the setting where previous optimization runs are available, and we wish to use their results to warm-start a new optimization run. We develop an ensemble model that can incorporate the results of pas...


Scalable Collaborative Bayesian Preference Learning

Learning about users’ utilities from preference, discrete choice or implicit feedback data is of integral importance in e-commerce, targeted advertising and web search. Due to the sparsity and diffuse nature of data, Bayesian approaches hold much promise, yet most prior work does not scale up to realistic data sizes. We shed light on why inference for such settings is computationally difficult ...


Hierarchical Meta-Rules for Scalable Meta-Learning

The Pairwise Meta-Rules (PMR) method proposed in [18] has been shown to improve the predictive performance of several meta-learning algorithms for the algorithm ranking problem. Given m target objects (e.g., algorithms), the training complexity of the PMR method with respect to m is quadratic: $\binom{m}{2} = m(m-1)/2$. This is usually not a problem when m is moderate, such as when ranking 20 dif...


Generalized entropies through Bayesian estimation

The demand made upon computational analysis of observed symbolic sequences has been increasing in the last decade. Here, the concept of entropy receives applications, and the generalizations according to Tsallis ($H_q^{(T)}$) and Rényi ($H_q^{(R)}$) provide whole spectra of entropies characterized by an order q. An enduring practical problem lies in the estimation of these entropies from observed data. Th...


Scalable Bayesian Reinforcement Learning for Multiagent POMDPs

Bayesian methods for reinforcement learning (RL) allow model uncertainty to be considered explicitly and offer a principled way of dealing with the exploration/exploitation tradeoff. However, for multiagent systems there have been few such approaches, and none of them apply to problems with state uncertainty. In this paper, we fill this gap by proposing a Bayesian RL framework for multiagent pa...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i9.26337